    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no further axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs between them, which pose a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory; these proofs require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
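
    To make the synthetic setting concrete, here is a minimal Coq sketch of the decidability and enumerability notions such a development works with; the names and exact formulations are illustrative, not necessarily those of the thesis. Both notions are phrased directly in terms of type-theoretic functions, with no machine model.

        (* A predicate is decidable if some (type-theoretic) function decides it. *)
        Definition decidable {X} (p : X -> Prop) : Prop :=
          exists f : X -> bool, forall x, p x <-> f x = true.

        (* A predicate is enumerable if some function lists exactly its elements. *)
        Definition enumerable {X} (p : X -> Prop) : Prop :=
          exists f : nat -> option X, forall x, p x <-> exists n, f n = Some x.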

    Church's thesis and related axioms in Coq's type theory

    "Church's thesis" (CT\mathsf{CT}) as an axiom in constructive logic states that every total function of type N→N\mathbb{N} \to \mathbb{N} is computable, i.e. definable in a model of computation. CT\mathsf{CT} is inconsistent in both classical mathematics and in Brouwer's intuitionism since it contradicts Weak K\"onig's Lemma and the fan theorem, respectively. Recently, CT\mathsf{CT} was proved consistent for (univalent) constructive type theory. Since neither Weak K\"onig's Lemma nor the fan theorem are a consequence of just logical axioms or just choice-like axioms assumed in constructive logic, it seems likely that CT\mathsf{CT} is inconsistent only with a combination of classical logic and choice axioms. We study consequences of CT\mathsf{CT} and its relation to several classes of axioms in Coq's type theory, a constructive type theory with a universe of propositions which does neither prove classical logical axioms nor strong choice axioms. We thereby provide a partial answer to the question which axioms may preserve computational intuitions inherent to type theory, and which certainly do not. The paper can also be read as a broad survey of axioms in type theory, with all results mechanised in the Coq proof assistant

    Hilbert's Tenth Problem in Coq (Extended Version)

    We formalise the undecidability of solvability of Diophantine equations, i.e. polynomial equations over natural numbers, in Coq's constructive type theory. To do so, we give the first full mechanisation of the Davis-Putnam-Robinson-Matiyasevich theorem, stating that every recursively enumerable problem -- in our case given by a Minsky machine -- is Diophantine. We obtain an elegant and comprehensible proof by using a synthetic approach to computability and by introducing Conway's FRACTRAN language as an intermediate layer. Additionally, we prove the reverse direction and show that every Diophantine relation is recognisable by ÎŒ-recursive functions, and we give a certified compiler from ÎŒ-recursive functions to Minsky machines.
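
    For intuition, here is a Coq sketch of FRACTRAN's step relation as it could serve as such an intermediate layer; the names (fractran_prog, fractran_step) and the exact formulation are ours and need not match the library's.

        (* A FRACTRAN program is a finite list of fractions p/q, given as pairs. *)
        Definition fractran_prog := list (nat * nat).

        (* One step from state n to state m: the first fraction p/q such that
           q divides p * n is applied, yielding the unique m with q * m = p * n. *)
        Fixpoint fractran_step (P : fractran_prog) (n m : nat) : Prop :=
          match P with
          | nil => False
          | (p, q) :: P' =>
              q * m = p * n
              \/ (~ (exists k, q * k = p * n) /\ fractran_step P' n m)
          end.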

    The weak call-by-value λ-calculus is reasonable for both time and space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reduction steps and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not cover sublinear time or space. Note that our measures still have the well-known size explosion property, where the space measure of a computation can be exponentially bigger than its time measure. However, our result implies that this exponential gap disappears once complexity classes are considered instead of concrete computations. We consider this result a first step towards a solution for the long-standing open problem of whether the natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on natural measures and enables the formal verification of complexity-theoretic proofs concerning complexity classes, both on paper and in proof assistants. The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
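
    To fix notation, here is a Coq sketch of a de Bruijn term syntax and the size function underlying the two measures; this is illustrative and not the paper's exact formalisation.

        (* Terms of the weak call-by-value λ-calculus, with de Bruijn indices. *)
        Inductive term : Type :=
        | var (n : nat)
        | lam (s : term)
        | app (s t : term).

        (* The size of a term, used for the space measure. *)
        Fixpoint size (s : term) : nat :=
          match s with
          | var _ => 1
          | lam s => 1 + size s
          | app s t => 1 + size s + size t
          end.

        (* For a computation s0 ≻ s1 ≻ ... ≻ sk:
           time  := k                      (number of beta-reduction steps),
           space := max over i of size si  (size of the largest term). *)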

    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result as a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
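
    Schematically, and in our own notation rather than the paper's (additive terms in the input size are elided), the mutual simulation result can be read as:

        \[
          \mathrm{time}_{\mathrm{TM}} \le \mathrm{poly}(\mathrm{time}_{L}), \qquad
          \mathrm{space}_{\mathrm{TM}} \le c \cdot \mathrm{space}_{L},
        \]
        \[
          \mathrm{time}_{L} \le \mathrm{poly}(\mathrm{time}_{\mathrm{TM}}), \qquad
          \mathrm{space}_{L} \le c' \cdot \mathrm{space}_{\mathrm{TM}}.
        \]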

    A Mechanised Proof of the Time Invariance Thesis for the Weak Call-By-Value λ-Calculus

    The weak call-by-value λ-calculus L and Turing machines can simulate each other with a polynomial overhead in time. This time invariance thesis for L, where the number of ÎČ-reductions of a computation is taken as its time complexity, is the culmination of a 25-year line of research, combining work by Blelloch, Greiner, Dal Lago, Martini, Accattoli, Forster, Kunze, Roth, and Smolka. The present paper presents a mechanised proof of the time invariance thesis for L, constituting the first mechanised equivalence proof between two standard models of computation covering time complexity. The mechanisation builds on an existing framework for the extraction of Coq functions to L and contributes a novel Hoare logic framework for the verification of Turing machines. The mechanised proof of the time invariance thesis establishes L as a model for future developments of mechanised computational complexity theory regarding time. It can also be seen as a non-trivial but elementary case study of time-complexity-preserving translations between a functional language and a sequential machine model. As a by-product, we obtain a mechanised many-one equivalence proof of the halting problems for L and Turing machines, which we contribute to the Coq Library of Undecidability Proofs.
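
    The following Coq sketch shows the synthetic notion of many-one reducibility underlying the by-product; HaltL and HaltTM are illustrative names for the two halting problems, not the library's identifiers.

        (* p many-one reduces to q: some function maps p-instances to q-instances
           preserving and reflecting membership. *)
        Definition reduces {X Y} (p : X -> Prop) (q : Y -> Prop) : Prop :=
          exists f : X -> Y, forall x, p x <-> q (f x).

        (* With HaltL : term -> Prop and HaltTM : tm -> Prop the halting problems of
           L and of Turing machines, the stated by-product amounts to
             reduces HaltL HaltTM /\ reduces HaltTM HaltL. *)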

    Constructive and Synthetic Reducibility Degrees: Post's Problem for Many-one and Truth-table Reducibility in Coq

    We present a constructive analysis and machine-checked theory of one-one, many-one, and truth-table reductions based on synthetic computability theory in the Calculus of Inductive Constructions, the type theory underlying the proof assistant Coq. We give elegant, synthetic, and machine-checked proofs of Post's landmark results that a simple predicate exists which is enumerable and undecidable but many-one incomplete (Post's problem for many-one reducibility), and that a hypersimple predicate exists which is enumerable and undecidable but truth-table incomplete (Post's problem for truth-table reducibility). In synthetic computability, one assumes axioms that allow carrying out computability theory with all definitions and proofs purely in terms of functions of the type theory, with no mention of a model of computation. Proofs can focus on the essence of the argument without having to sacrifice formality. Synthetic computability also clears the lens for constructivisation. Our constructively careful definition of simple and hypersimple predicates allows us to avoid classical axioms, even Markov's principle, while still yielding the expected strong results.
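
    For concreteness, here is a Coq sketch of how enumerability, infinitude, and simple predicates can be phrased synthetically; the definitions are illustrative and need not coincide with the paper's constructively careful ones.

        Require Import List.

        (* A predicate is enumerable if some function lists exactly its elements. *)
        Definition enumerable {X} (p : X -> Prop) : Prop :=
          exists f : nat -> option X, forall x, p x <-> exists n, f n = Some x.

        (* A constructive reading of "infinite": no finite list exhausts p. *)
        Definition infinite {X} (p : X -> Prop) : Prop :=
          forall l : list X, exists x, p x /\ ~ In x l.

        (* p is simple: enumerable, with an infinite complement that contains
           no infinite enumerable subset. *)
        Definition simple {X} (p : X -> Prop) : Prop :=
          enumerable p
          /\ infinite (fun x => ~ p x)
          /\ ~ (exists q : X -> Prop,
                  enumerable q /\ infinite q /\ (forall x, q x -> ~ p x)).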
    • 

    corecore